Deploy a Kubernetes Cluster with Service Mesh on DigitalOcean Using K3s
This guide will help you set up a Kubernetes cluster with a service mesh, using k3s (version 0.10.2 at the time of writing) and Rio. We’ll deploy the cluster on DigitalOcean, with Ubuntu 18.04 LTS as the OS throughout this tutorial.
If you sign up for DigitalOcean using this link, you’ll receive $50 to spend on their services over 30 days.
If you wish to run your cluster on your own hardware, you could do so on Raspberry Pis (ARM64) or Intel NUCs (x86_64) for example.
This tutorial will result in a three-node cluster with one Kubernetes master and two workers.
Prepare #
This tutorial assumes you have a domain name. If you don’t have one, you can register one at Porkbun. In this tutorial, we’ll use `example.com` (replace it when you see it).
Next, create three Droplets on DigitalOcean running Ubuntu 18.04 LTS with “Private networking” enabled. Make sure they are all in the same data center.
Create DNS records pointing at your nodes’ (public) IP addresses. How you do this depends on your DNS provider or domain registrar. Please refer to their documentation. In this tutorial, the nodes will have the FQDNs `node-1.example.com`, `node-2.example.com`, and `node-3.example.com`.
Make a note of the private IP address of each node. If you’re using DigitalOcean, you can find the private IP address in the DigitalOcean web console or using the DigitalOcean command line tool.
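If you have the DigitalOcean CLI installed and authenticated, listing the addresses from the terminal might look like this (a sketch, assuming `doctl` has been set up with `doctl auth init`):

```shell
# List droplet names with their public and private IPv4 addresses
doctl compute droplet list --format Name,PublicIPv4,PrivateIPv4
```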
Also, make sure that port `6443` is open for incoming connections in your nodes’ firewalls.
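On Ubuntu, if you manage the firewall with `ufw`, opening the port could look like the following sketch (skip this if you use the DigitalOcean Cloud Firewall instead):

```shell
# Allow the Kubernetes API server port for incoming TCP connections
ufw allow 6443/tcp

# Verify the rule is active
ufw status
```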
Ensure you are logged in as root (`sudo su -`) when executing the commands in this tutorial.
Configure the Nodes #
Note: The following should be run on all nodes.
Start by upgrading the system:
# apt update && apt upgrade -y
If the kernel was upgraded when running `apt upgrade -y`, reboot:
# reboot
Next, add the hostnames and private IP addresses to the `/etc/hosts` file (replace `example.com` with your domain):
# echo "11.11.11.11 node-1.example.com" >> /etc/hosts # 11.11.11.11 is the private IP of node 1 (master)
# echo "22.22.22.22 node-2.example.com" >> /etc/hosts # 22.22.22.22 is the private IP of node 2 (worker)
# echo "33.33.33.33 node-3.example.com" >> /etc/hosts # 33.33.33.33 is the private IP of node 3 (worker)
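You can then verify on each node that the hostnames resolve to the expected private IPs, for example:

```shell
# Each lookup should print the private IP you added to /etc/hosts
getent hosts node-1.example.com
getent hosts node-2.example.com
getent hosts node-3.example.com
```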
Deploy the Kubernetes Master #
Note: The following should be run on the master node.
Log back into the designated master node; in this tutorial, it’s `node-1.example.com`.
Run the following command to install k3s as the master, advertising the private IP address for internal cluster traffic and exposing the public IP address for external use (replace `<private ip>` with your private IP address, and `<public ip>` with your public IP address):
# curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--advertise-address <private ip> --node-external-ip <public ip>" sh -
Now grab the join token, which we’ll need to join the workers to the cluster (copy the output of the command):
# cat /var/lib/rancher/k3s/server/node-token
Deploy the Kubernetes Workers #
Note: The following should be run on the worker node(s).
Install k3s on the worker nodes and join them to the cluster. Replace `node-1.example.com` below with the hostname of your Kubernetes master, `MY_K3S_TOKEN` with the join token you grabbed in the previous part of the tutorial, and `<public ip>` with the public IP of your worker node.
Run:
# curl -sfL https://get.k3s.io | K3S_URL=https://node-1.example.com:6443 INSTALL_K3S_EXEC="--node-external-ip <public ip>" K3S_TOKEN="MY_K3S_TOKEN" sh -
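Back on the master node, you can confirm that the workers joined successfully (k3s bundles `kubectl`, so it’s available out of the box):

```shell
# All three nodes should eventually report status "Ready"
kubectl get nodes
```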
Deploy Rio Service Mesh #
At the time of writing, there is a bug preventing Rio from installing on the latest version of k3s (0.10.2)!
Note: The following should be run on the master node.
On your master node, install Rio by running the following commands:
# curl -sfL https://get.rio.io | sh -
# rio install
To make sure the Rio service mesh pods are up and running, run the following:
# kubectl get po -n rio-system
Run the following to use your own domain name instead of `example.onrio.io`:
# rio domain register www.example.com default/route1
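With the domain registered, you can deploy a test workload to exercise the mesh. The sketch below is an assumption about the Rio CLI for this version (check `rio run --help` before relying on the exact flags), and `nginx` is just a convenient public image:

```shell
# Deploy a simple web service into the default namespace and expose port 80
rio run -p 80 --name route1 nginx

# List Rio services and their generated endpoints
rio ps
```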
Troubleshooting #
- Make sure the required ports are open in your firewall
- Check the k3s and Rio issue trackers to see if you’ve encountered a bug (at the time of writing, there is a bug in Rio that may prevent it from installing correctly)
Last Words #
That should do it! 😀 You should have a k3s cluster running with a Rio-supplied service mesh.
If you plan on using this cluster for anything serious, you should continue configuring firewalls and taking other measures to ensure your cluster is secure.
To learn more about Kubernetes, check out the Kubernetes documentation, the k3s documentation, and maybe read a few books:
- Kubernetes: Up and Running: Dive into the Future of Infrastructure
- Cloud Native DevOps with Kubernetes: Building, Deploying, and Scaling Modern Applications in the Cloud
- The Kubernetes Book
Audible has many books on Kubernetes. If you sign up using this link, you’ll get 30 days for free!
Hope you learned something reading through this tutorial! 😊 In future tutorials, we’ll configure a k3s cluster with High Availability (HA) features.
Revision #
2023-08-31 Revised language, fixed commands